Video Acceleration Magnification
The ability to amplify or reduce subtle image changes over time is useful in
contexts such as video editing, medical video analysis, product quality control
and sports. These contexts often contain large motions, which severely distort
current video amplification methods that magnify change linearly. In this work
we propose a method to cope with large motions while
still magnifying small changes. We make the following two observations: i)
large motions are linear on the temporal scale of the small changes; ii) small
changes deviate from this linearity. We ignore linear motion and propose to
magnify acceleration. Our method is purely Eulerian and requires no optical
flow, temporal alignment, or region annotations. We link temporal
second-order derivative filtering to spatial acceleration magnification. We
apply our method to moving objects where we show motion magnification and color
magnification. We provide quantitative as well as qualitative evidence for our
method while comparing to the state of the art.
Comment: Accepted paper at CVPR 2017. Project webpage:
http://acceleration-magnification.github.io
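The two observations above suggest a simple construction: a discrete second-order temporal derivative is zero for linear change and non-zero exactly where change deviates from linearity, so amplifying it magnifies small deviations while leaving large linear motion intact. A minimal per-pixel intensity sketch (the function name and `alpha` factor are illustrative assumptions; the actual method filters a pyramid decomposition of the video rather than raw pixel values):

```python
import numpy as np

def acceleration_magnify(frames, alpha=8.0):
    """frames: (T, H, W) float array of pixel intensities over time."""
    # discrete second-order temporal derivative: zero for linear change,
    # non-zero only where a pixel's trajectory deviates from linearity
    accel = frames[2:] - 2.0 * frames[1:-1] + frames[:-2]
    out = frames.copy()
    # amplify the deviation; purely linear motion passes through unchanged
    out[1:-1] += alpha * accel
    return out
```

Note that a purely linear intensity ramp is a fixed point of this operator, which is the sense in which large (locally linear) motion is ignored.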
No Spare Parts: Sharing Part Detectors for Image Categorization
This work aims for image categorization using a representation of distinctive
parts. Different from existing part-based work, we argue that parts are
naturally shared between image categories and should be modeled as such. We
motivate our approach with a quantitative and qualitative analysis by
backtracking where selected parts come from. Our analysis shows that in
addition to the category parts defining the class, the parts coming from the
background context and parts from other image categories improve categorization
performance. Part selection should not be done separately for each category,
but instead be shared and optimized over all categories. To incorporate part
sharing between categories, we present an algorithm based on AdaBoost to
jointly optimize part sharing and selection, as well as fusion with the global
image representation. We achieve results competitive to the state-of-the-art on
object, scene, and action categories, further improving over deep convolutional
neural networks.
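The idea of optimizing part selection over all categories at once can be illustrated with a minimal JointBoost-style sketch: each boosting round picks the single weak learner whose weighted error, summed over every category, is lowest, so all categories share one selected part pool. This is an assumed simplification, not the authors' exact algorithm; `part_scores` and `labels` are hypothetical inputs.

```python
import numpy as np

def shared_part_boosting(part_scores, labels, n_rounds=10):
    """part_scores: (P, N) part-detector responses; labels: (C, N) in {-1,+1}."""
    P, N = part_scores.shape
    C = labels.shape[0]
    w = np.ones((C, N)) / N                       # per-category sample weights
    h = np.sign(part_scores)
    h[h == 0] = 1.0                               # (P, N) weak hypotheses
    miss = (h[None] != labels[:, None]).astype(float)   # (C, P, N) mistakes
    picks, alphas = [], []
    for _ in range(n_rounds):
        err = np.einsum('cn,cpn->p', w, miss)     # error summed over categories
        p = int(np.argmin(err))                   # one part shared by all classes
        e = np.clip(err[p] / C, 1e-6, 1 - 1e-6)
        a = 0.5 * np.log((1 - e) / e)
        w *= np.exp(a * miss[:, p])               # upweight misclassified samples
        w /= w.sum(axis=1, keepdims=True)
        picks.append(p)
        alphas.append(a)
    return picks, alphas
```

The key design point is that `err` aggregates across categories before the argmin, so a part that helps several categories moderately can beat a part that helps only one category strongly.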
Objects2action: Classifying and localizing actions without any video example
The goal of this paper is to recognize actions in video without the need for
examples. Different from traditional zero-shot approaches, we do not demand the
design and specification of attribute classifiers and class-to-attribute
mappings to enable transfer from seen classes to unseen classes. Our key
contribution is objects2action, a semantic word embedding that is spanned by a
skip-gram model of thousands of object categories. Action labels are assigned
to an object encoding of unseen video based on a convex combination of action
and object affinities. Our semantic embedding has three main characteristics to
accommodate for the specifics of actions. First, we propose a mechanism to
exploit multiple-word descriptions of actions and objects. Second, we
incorporate the automated selection of the most responsive objects per action.
And finally, we demonstrate how to extend our zero-shot approach to the
spatio-temporal localization of actions in video. Experiments on four action
datasets demonstrate the potential of our approach.
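The labeling step described above — scoring unseen actions by a convex combination of action–object affinities over the most responsive objects — can be sketched as follows. Word embeddings for actions and objects (e.g. from a skip-gram model) are assumed given; the function name, `top_k` selection, and cosine affinity are illustrative assumptions.

```python
import numpy as np

def zero_shot_action_scores(obj_probs, obj_emb, act_emb, top_k=2):
    """obj_probs: (O,) object scores for a video; *_emb: word embeddings."""
    # cosine affinity between action labels and object categories
    a = act_emb / np.linalg.norm(act_emb, axis=1, keepdims=True)
    o = obj_emb / np.linalg.norm(obj_emb, axis=1, keepdims=True)
    aff = np.maximum(a @ o.T, 0.0)                # (A, O), drop negative affinity
    # keep only the top-k most responsive objects per action
    idx = np.argsort(-aff, axis=1)[:, :top_k]
    mask = np.zeros_like(aff)
    np.put_along_axis(mask, idx, 1.0, axis=1)
    aff *= mask
    # convex combination: per-action affinity weights sum to one
    w = aff / np.maximum(aff.sum(axis=1, keepdims=True), 1e-12)
    return w @ obj_probs                          # (A,) action scores
```

Restricting to the top-k objects per action implements the "most responsive objects" selection: irrelevant object detectors contribute nothing to an action's score.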
Exploiting Photographic Style for Category-Level Image Classification by Generalizing the Spatial Pyramid
This paper investigates the use of photographic style for category-level image classification. Specifically, we exploit the assumption that images within a category share a similar style, defined by attributes such as colorfulness, lighting, depth of field, viewpoint, and saliency. For these style attributes we create correspondences across images by a generalized spatial pyramid matching scheme. Where the spatial pyramid groups features spatially, we allow more general feature grouping, and in this paper we focus on grouping images by photographic style. We evaluate our approach on an object classification task and investigate style differences between professional and amateur photographs. We show that a generalized pyramid with style-based attributes improves performance on the professional Corel and amateur Pascal VOC 2009 image datasets.
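The generalization can be sketched as replacing the spatial pyramid's x,y cells with bins over a scalar style attribute (e.g. a per-feature saliency or colorfulness score in [0, 1)), then concatenating per-bin bag-of-words histograms across pyramid levels. The function and parameter names below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def generalized_pyramid_hist(feature_ids, attr_values, n_words, levels=(1, 2, 4)):
    """feature_ids: (M,) visual-word ids; attr_values: (M,) style scores in [0, 1)."""
    hists = []
    for n_bins in levels:
        # bin features by the style attribute instead of spatial position
        bins = np.minimum((attr_values * n_bins).astype(int), n_bins - 1)
        for b in range(n_bins):
            sel = feature_ids[bins == b]
            h = np.bincount(sel, minlength=n_words).astype(float)
            hists.append(h / max(h.sum(), 1.0))   # L1-normalize each cell
    return np.concatenate(hists)                  # pyramid over style, not space
```

With `attr_values` set to a feature's normalized x (or y) coordinate, this reduces to an ordinary 1-D spatial pyramid, which is the sense in which the scheme generalizes spatial matching.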